# Multi-task Pre-finetuning
## Muppet Roberta Base (facebook/muppet-roberta-base)

- License: MIT
- Tags: Large Language Model, Transformers, English

A large-scale multi-task representation model obtained by pre-finetuning the RoBERTa-base architecture. It outperforms the original roberta-base on GLUE and question-answering tasks.

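A minimal sketch of loading the checkpoint with the Hugging Face `transformers` library; the model id `facebook/muppet-roberta-base` is assumed by combining the organization and model name listed above:

```python
# Sketch: load the pre-finetuned checkpoint and run a forward pass.
# The model id is assumed from the card above, not confirmed here.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/muppet-roberta-base")
model = AutoModel.from_pretrained("facebook/muppet-roberta-base")

# Encode a sentence and inspect the contextual representations.
inputs = tokenizer("Pre-finetuning improves downstream transfer.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```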
## Muppet Roberta Large (facebook/muppet-roberta-large)

- License: MIT
- Tags: Large Language Model, Transformers, English

A large-scale multi-task pre-finetuned version of RoBERTa-large. It excels on GLUE and question-answering tasks, with significant gains, especially on small datasets.

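Because the pre-finetuned weights are reported to transfer well to GLUE-style tasks, a minimal sketch of attaching a classification head for downstream fine-tuning follows; the model id `facebook/muppet-roberta-large` and `num_labels=2` are illustrative assumptions:

```python
# Sketch: add a sequence-classification head on top of the pre-finetuned
# encoder. The head is randomly initialized and still needs fine-tuning
# on task data; num_labels=2 is only an illustrative choice.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/muppet-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/muppet-roberta-large", num_labels=2
)

inputs = tokenizer("A sentence to classify.", return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch, num_labels)
```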